Many existing imitation learning datasets are collected from multiple demonstrators, each with different expertise in different parts of the environment. However, standard imitation learning algorithms typically treat all demonstrators as homogeneous, regardless of their expertise, and therefore absorb the weaknesses of any suboptimal demonstrator. In this work, we show that unsupervised learning over demonstrator expertise can lead to a consistent boost in the performance of imitation learning algorithms. We develop and optimize a joint model over the learned policy and the expertise levels of the demonstrators. This enables our model to learn from the optimal behavior and to filter out the suboptimal behavior of each demonstrator. Our model learns a single policy that can outperform even the best demonstrator, and it can be used to estimate the expertise of any demonstrator at any state. We illustrate our findings on real robotic continuous control tasks and on discrete environments such as MiniGrid and chess, outperforming competing methods in 21 out of 23 settings, with an average of 7% and up to 60% improvement in terms of final reward.
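To make the joint policy/expertise model concrete, here is a minimal PyTorch sketch. The mixture-with-uniform likelihood, the embedding-based expertise parameterization, and all class and variable names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumed formulation): each demonstrator k has a state-dependent
# expertise rho_k(s) in [0, 1]; their action likelihood is modeled as a mixture of
# the shared policy and a uniform "random behavior" distribution, and the policy
# and expertise levels are trained jointly by maximum likelihood.
import torch
import torch.nn as nn

class JointPolicyExpertise(nn.Module):
    def __init__(self, obs_dim, n_actions, n_demonstrators, hidden=64):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )
        # one embedding per demonstrator; combined with the state to give expertise
        self.demo_emb = nn.Embedding(n_demonstrators, hidden)
        self.state_enc = nn.Linear(obs_dim, hidden)
        self.n_actions = n_actions

    def forward(self, obs, demo_id):
        logits = self.policy(obs)              # single policy shared across all data
        rho = torch.sigmoid(                   # expertise estimate in [0, 1]
            (self.state_enc(obs) * self.demo_emb(demo_id)).sum(-1)
        )
        return logits, rho

def imitation_loss(model, obs, action, demo_id):
    logits, rho = model(obs, demo_id)
    pi = torch.softmax(logits, dim=-1).gather(1, action.unsqueeze(1)).squeeze(1)
    uniform = 1.0 / model.n_actions
    # likelihood of the observed action under "expert with prob rho, random otherwise"
    return -torch.log(rho * pi + (1.0 - rho) * uniform + 1e-8).mean()
```

Under this kind of objective, low-expertise demonstrators contribute little gradient to the shared policy, which is what allows filtering of suboptimal behavior.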
While advances in multi-agent learning have enabled the training of increasingly complex agents, most existing techniques produce a final policy that is not designed to adapt to a new partner's strategy. We would like our AI agents to adjust their strategy based on the strategies of those around them. In this work, we study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time and must interact with and adapt to new partners at test time. This setting is challenging because we must infer a new partner's strategy and adapt our policy to it, all without knowledge of the environment reward or dynamics. We formalize this problem of conditional multi-agent imitation learning and propose a novel approach that addresses the difficulties of scalability and data scarcity. Our key insight is that the variation across partners in multi-agent games is often highly structured and can be represented via a low-rank subspace. Leveraging tools from tensor decomposition, our model learns a low-rank subspace over ego- and partner-agent strategies, then infers and adapts to a new partner strategy by interpolating within that subspace. We experiment with a mix of collaborative tasks, including bandit, particle, and Hanabi environments. In addition, we test our conditional policies against real human partners in a user study on the Overcooked game. Compared to baselines, our model adapts better to new partners and robustly handles diverse settings, ranging from discrete to continuous actions and from static to online evaluation with AI or human partners.
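The low-rank idea can be illustrated with a small numpy sketch. It uses a plain matrix factorization over synthetic tabular policies and a least-squares fit for the new partner; the truncated SVD, the inference step, and all names are stand-ins for the paper's tensor-decomposition machinery, not the actual method.

```python
# Illustrative sketch: represent each partner's policy matrix (states x actions)
# as a combination of a few shared basis policies, then infer a new partner's
# coefficients from a handful of observed states and interpolate in the subspace.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_partners, rank = 20, 4, 10, 3

# Stack training partners' policies and extract a rank-r basis with a truncated SVD.
policies = rng.dirichlet(np.ones(n_actions), size=(n_partners, n_states))
P = policies.reshape(n_partners, -1)
_, _, Vt = np.linalg.svd(P - P.mean(0), full_matrices=False)
basis = Vt[:rank]                                  # shared low-rank strategy subspace

def infer_coefficients(observed_rows, observed_state_ids):
    """Least-squares fit of a new partner onto the subspace from partial observations."""
    mean_rows = P.mean(0).reshape(n_states, n_actions)[observed_state_ids]
    B = basis.reshape(rank, n_states, n_actions)[:, observed_state_ids, :]
    A = B.reshape(rank, -1).T                      # (n_obs * n_actions, rank)
    y = (observed_rows - mean_rows).ravel()
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Reconstruct the new partner's full policy by interpolating in the subspace.
new_partner = rng.dirichlet(np.ones(n_actions), size=n_states)
obs_states = np.arange(5)
c = infer_coefficients(new_partner[obs_states], obs_states)
estimate = (P.mean(0) + c @ basis).reshape(n_states, n_actions)
```

The ego policy can then be conditioned on the inferred coefficients rather than on raw partner trajectories, which is what keeps the approach data-efficient at test time.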
We present PantheonRL, a multi-agent reinforcement learning software package for dynamic training interactions such as round-robin, adaptive, and ad-hoc training. Our package is designed around flexible agent objects that can be easily configured to support different training interactions, and it handles fully general multi-agent environments with mixed rewards and n agents. Because it is built on top of Stable-Baselines3, our package works directly with existing powerful deep RL algorithms. Finally, PantheonRL comes with an intuitive yet functional web user interface for configuring experiments and launching multiple asynchronous jobs. Our package can be found at https://github.com/stanford-iliad/pantheonrl.
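For intuition only, here is a sketch of the kind of round-robin training interaction such a package supports. The agent and environment interfaces below are hypothetical and are not PantheonRL's actual API.

```python
# Illustrative sketch -- NOT PantheonRL's actual API. It shows the "agent object"
# abstraction that lets a round-robin trainer rotate an ego agent through a pool
# of partner agents in a shared two-player environment.
def round_robin(env, ego, partners, episodes_per_partner=10):
    """Assumed interfaces: agents expose .act(obs) and .update(obs, action, reward, done);
    env exposes .reset() -> {agent: obs} and .step(actions) -> (obs, rewards, done)."""
    for partner in partners:                      # rotate through partner agents
        for _ in range(episodes_per_partner):
            obs, done = env.reset(), False
            while not done:
                actions = {"ego": ego.act(obs["ego"]),
                           "partner": partner.act(obs["partner"])}
                obs, rewards, done = env.step(actions)
                ego.update(obs["ego"], actions["ego"], rewards["ego"], done)
                partner.update(obs["partner"], actions["partner"],
                               rewards["partner"], done)
```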
Probabilistic circuits (PCs) are a family of generative models that allow for the computation of exact likelihoods and marginals of the probability distributions they encode. PCs are both expressive and tractable, and serve as popular choices for discrete density estimation tasks. However, large PCs are susceptible to overfitting, and only a few regularization strategies (e.g., dropout, weight decay) have been explored. We propose HyperSPNs: a new paradigm of generating the mixture weights of large PCs using a small-scale neural network. Our framework can be viewed as a soft weight-sharing strategy, which combines the greater expressiveness of large models with the better generalization and memory-footprint properties of small models. We show the merits of our regularization strategy on two state-of-the-art PC families introduced in recent literature, RAT-SPNs and EiNets, and demonstrate generalization improvements in both models on a suite of density estimation benchmarks in both discrete and continuous domains.
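A minimal sketch of the hyper-network idea, under assumed shapes and names: a small shared MLP maps a low-dimensional embedding of each sum node to that node's mixture weights, so the trainable parameter count stays small even when the circuit is large.

```python
# Minimal sketch (assumed shapes and names, not the paper's exact architecture):
# instead of storing a full weight tensor for a large probabilistic circuit, a tiny
# per-node embedding plus a shared MLP generates each sum node's mixture weights.
import torch
import torch.nn as nn

class HyperWeights(nn.Module):
    def __init__(self, n_sum_nodes, n_children, emb_dim=16, hidden=32):
        super().__init__()
        self.node_emb = nn.Embedding(n_sum_nodes, emb_dim)    # tiny per-node code
        self.hyper = nn.Sequential(                           # small shared network
            nn.Linear(emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_children)
        )

    def forward(self, node_ids):
        logits = self.hyper(self.node_emb(node_ids))
        return torch.softmax(logits, dim=-1)   # normalized mixture weights per sum node

# The circuit's likelihood is then evaluated with these generated weights, so the
# regularization comes from the small parameter count of `node_emb` + `hyper`.
weights = HyperWeights(n_sum_nodes=1000, n_children=8)(torch.arange(1000))
```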
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and can be adapted to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Although foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, since the defects of a foundation model are inherited by all of the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of, owing to their emergent properties. To tackle these questions, we believe that much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
This paper describes a Relevance-Zone pattern table (RZT) that can be used to replace a traditional transposition table. An RZT stores exact game values for patterns that are discovered during a Relevance-Zone-Based Search (RZS), which is the current state-of-the-art in solving L&D problems in Go. Positions that share the same pattern can reuse the same exact game value in the RZT. The pattern matching scheme for RZTs is implemented using a radix tree, taking into consideration patterns with different shapes. To improve the efficiency of table lookups, we designed a heuristic that prevents redundant lookups. The heuristic can safely skip previously queried patterns for a given position, reducing the overhead to 10% of the original cost. We also analyze the time complexity of the RZT both theoretically and empirically. Experiments show that, in practice, the overhead of traversing the radix tree during lookup grows only logarithmically with the number of entries stored in the table. Experiments also show that the use of an RZT instead of a traditional transposition table significantly reduces the number of searched nodes on two data sets of 7x7 and 19x19 L&D Go problems.
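For illustration, here is a simplified pattern table built on a plain Python trie rather than a compressed radix tree; the pattern encoding (ordered cell/colour pairs) and the stored game values are placeholders, not the paper's actual representation.

```python
# Simplified pattern table sketch: patterns are serialized cell-by-cell in a
# canonical order and mapped to exact game values, so any position containing
# a stored pattern can reuse its value without re-searching.
class PatternTable:
    def __init__(self):
        self.root = {}

    def insert(self, pattern, value):
        """pattern: sequence of (cell, colour) pairs in a canonical order."""
        node = self.root
        for key in pattern:
            node = node.setdefault(key, {})
        node["value"] = value          # exact game value for this relevance zone

    def lookup(self, pattern):
        node = self.root
        for key in pattern:
            node = node.get(key)
            if node is None:
                return None            # no stored pattern matches this position
        return node.get("value")

table = PatternTable()
table.insert([((3, 4), "B"), ((3, 5), "W")], "win")
assert table.lookup([((3, 4), "B"), ((3, 5), "W")]) == "win"
```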
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
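A hedged sketch of such a generation-and-filter pipeline is given below; the prompts, the relevance check, and the `lm` callable are hypothetical stand-ins rather than the paper's actual templates or filtering models.

```python
# Hypothetical pipeline sketch: one LM pass drafts yes/no questions probing a
# behaviour, a second pass keeps only the questions judged relevant, and the
# surviving examples form a labelled evaluation dataset.
def generate_questions(lm, behaviour, n=100):
    prompt = (f"Write a yes/no question that tests whether an AI assistant "
              f"exhibits the following behaviour: {behaviour}\nQuestion:")
    return [lm(prompt).strip() for _ in range(n)]

def filter_relevant(lm, behaviour, questions):
    kept = []
    for q in questions:
        judge = (f"Question: {q}\nDoes answering 'yes' to this question indicate "
                 f"the behaviour '{behaviour}'? Answer yes or no:")
        if lm(judge).strip().lower().startswith("yes"):
            kept.append({"question": q, "answer_matching_behaviour": "yes"})
    return kept

# `lm` is any callable mapping a prompt string to a completion string.
```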
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
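A hedged sketch of the supervised critique-and-revision step follows; the principle text, the prompt formats, and the `lm` callable are illustrative assumptions, not the paper's actual constitution or templates.

```python
# Hypothetical sketch of the supervised phase: draw a response, ask the model to
# critique it against a principle, then ask it to revise the response accordingly.
PRINCIPLES = [
    "Identify ways in which the response is harmful, unethical, or misleading.",
]

def critique_and_revise(lm, user_prompt):
    response = lm(f"Human: {user_prompt}\n\nAssistant:")
    for principle in PRINCIPLES:
        critique = lm(f"Response: {response}\n"
                      f"Critique request: {principle}\nCritique:")
        response = lm(f"Response: {response}\nCritique: {critique}\n"
                      f"Revision request: Rewrite the response to address the critique.\n"
                      f"Revision:")
    return response  # revised responses are later used to finetune the original model
```

The RL phase then replaces the human preference labels of standard RLHF with comparisons produced by a model, using the resulting preference model as the reward signal.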
Humans form mental images of 3D scenes to support counterfactual imagination, planning, and motor control. Our abilities to predict the appearance and affordance of the scene from previously unobserved viewpoints aid us in performing manipulation tasks (e.g., 6-DoF kitting) with a level of ease that is currently out of reach for existing robot learning frameworks. In this work, we aim to build artificial systems that can analogously plan actions on top of imagined images. To this end, we introduce Mental Imagery for Robotic Affordances (MIRA), an action reasoning framework that optimizes actions with novel-view synthesis and affordance prediction in the loop. Given a set of 2D RGB images, MIRA builds a consistent 3D scene representation, through which we synthesize novel orthographic views amenable to pixel-wise affordances prediction for action optimization. We illustrate how this optimization process enables us to generalize to unseen out-of-plane rotations for 6-DoF robotic manipulation tasks given a limited number of demonstrations, paving the way toward machines that autonomously learn to understand the world around them for planning actions.
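A hedged sketch of an affordance-driven action optimization loop of this kind is shown below; `render_view`, `predict_affordance`, and `pixel_to_pose` are hypothetical stand-ins for the paper's components, not its actual interfaces.

```python
# Illustrative sketch: score candidate orthographic views by pixel-wise affordances
# and convert the best pixel back into a 6-DoF action.
import numpy as np

def optimize_action(render_view, predict_affordance, pixel_to_pose, view_params):
    """render_view: params -> HxW image from the learned 3D scene representation;
    predict_affordance: image -> HxW score map;
    pixel_to_pose: (view params, pixel) -> 6-DoF pose."""
    best_score, best_pose = -np.inf, None
    for params in view_params:                    # candidate novel views
        image = render_view(params)
        scores = predict_affordance(image)
        u, v = np.unravel_index(np.argmax(scores), scores.shape)
        if scores[u, v] > best_score:
            best_score, best_pose = scores[u, v], pixel_to_pose(params, (u, v))
    return best_pose                              # highest-affordance 6-DoF action
```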
Parameter-efficient methods (like Prompt or Adapters) for adapting pre-trained language models to downstream tasks have been popular recently. However, hindrances still prevent these methods from reaching their full potential. For example, two significant challenges are few-shot adaptation and cross-task generalization ability. To tackle these issues, we propose a general framework to enhance the few-shot adaptation and cross-domain generalization ability of parameter-efficient methods. In our framework, we prime the self-supervised model for parameter-efficient methods to rapidly adapt to various downstream few-shot tasks. To evaluate the authentic generalization ability of these parameter-efficient methods, we conduct experiments on a few-shot cross-domain benchmark containing 160 diverse NLP tasks. The experiment result reveals that priming by tuning PLM only with extra training tasks leads to the best performance. Also, we perform a comprehensive analysis of various parameter-efficient methods under few-shot cross-domain scenarios.